
fix(fiber): bound runner memory under sustained Fibre load #3306

Open

walldiss wants to merge 4 commits into evstack:julien/fiber from walldiss:pr2-runner-memory-bound

Conversation


@walldiss walldiss commented May 2, 2026

Issue

Running the talis Fibre throughput experiment with sustained evnode-txsim load reproducibly OOM-killed the evnode-fibre daemon at anon-rss ≈ 63.7 GiB within 30–60 seconds on a c6in.8xlarge box. Heap profiles taken just before the kill showed:

  • 24.6 GiB held in io.ReadAll allocations from the /tx HTTP handler — request bodies queued in the sequencer waiting to be turned into blocks
  • 14.4 GiB held in cache.PendingData.GetPendingData — DA-pending blob copies

Two independent unbounded data structures were the cause:

  1. SoloSequencer.queue had no upper bound. SubmitBatchTxs did s.queue = append(s.queue, txs...) unconditionally. Whenever /tx ingest exceeded the 1-block-per-second drain rate (trivially true with a 32-vCPU loadgen pushing >100 MB/s), the queue grew monotonically until OOM.
  2. inMemExecutor.FilterTxs ignored its maxBytes parameter and returned FilterOK for every tx unconditionally. The sequencer asked for 100 MiB worth, the executor answered "take everything", a single block ballooned to 369 MiB, and the submitter, seeing a single item exceed the per-blob cap, halted the daemon with `single item exceeds DA blob size limit`.

The runner's inMemExecutor ingest queue (txChan capacity 500, maxBlockTxs 500) was also too small for the ~50 MB/s sustained ingest the experiment targets: 500 slots fill in ~50 ms at 10K tx/s, making /tx rather than DA upload the binding constraint at ~22 MB/s.

Solution

  • SoloSequencer.SetMaxQueueBytes(n) with all-or-nothing admission. If an incoming batch would push queueBytes > n, the whole batch is rejected with the exported ErrQueueFull and the queue keeps its current contents (partial admission would force the reaper to track which prefix succeeded; whole-batch lets the reaper just retry later when the queue has drained). A sketch of the admission rule follows this list.
  • Reaper (block/internal/reaping/reaper.go) matches ErrQueueFull via errors.Is and treats it as transient backpressure: marks the rejected hashes as "seen" so we don't waste cycles re-hashing them every tick, logs a warn line with the dropped count, and continues. Without this match the existing fatal-on-submit-error path tears the daemon down on the first queue-full event.
  • inMemExecutor.FilterTxs now walks the input txs in arrival order and accumulates against maxBytes; once the budget would be exceeded the rest are returned as FilterPostpone so the sequencer puts them back on its queue.
  • evnode-fibre runner: after constructing the sequencer, call SetMaxQueueBytes with 10 × MaxBlobSize (= 1 GiB at 100 MiB blobs); lift inMemExecutor.txChan and maxBlockTxs from 500 to 10000 (~100 MiB at 10 KB tx-size).
  • pkg/config.ApplyFiberDefaults: MaxPendingHeadersAndData = 50 → 10. With Fibre's blobs up to 100 MiB and 3-FSP fan-out plus per-attempt retry buffers, 50 in-flight × 3 × retries crosses 64 GiB. 10 keeps the in-flight footprint bounded while still letting healthy uploads pipeline.
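
A minimal sketch of the all-or-nothing admission referenced in the first bullet. The struct fields shown are simplified assumptions, and the real SubmitBatchTxs takes a batch request (see the review snippet further down) rather than a plain slice:

```go
package solo

import (
	"errors"
	"sync"
)

// ErrQueueFull signals transient backpressure; callers (the reaper) retry later.
var ErrQueueFull = errors.New("sequencer queue full")

// SoloSequencer is reduced here to the fields the queue bound needs.
type SoloSequencer struct {
	mu            sync.Mutex
	queue         [][]byte
	queueBytes    uint64
	maxQueueBytes uint64 // 0 = legacy unbounded behaviour
}

// SetMaxQueueBytes caps the total tx bytes retained in the queue.
func (s *SoloSequencer) SetMaxQueueBytes(n uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.maxQueueBytes = n
}

// SubmitBatchTxs admits a batch all-or-nothing against the cap.
func (s *SoloSequencer) SubmitBatchTxs(txs [][]byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	var batchBytes uint64
	for _, tx := range txs {
		batchBytes += uint64(len(tx))
	}
	if s.maxQueueBytes > 0 && s.queueBytes+batchBytes > s.maxQueueBytes {
		// Reject the whole batch; the queue keeps its current contents and
		// the caller can retry the same batch once the queue has drained.
		return ErrQueueFull
	}
	s.queue = append(s.queue, txs...)
	s.queueBytes += batchBytes
	return nil
}
```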

Test plan

  • Sustained 5-min txsim at --concurrency 32 no longer OOMs (RSS stable ~10 GiB)
  • No `single item exceeds DA blob size limit` halts under load
  • Sequencer queue-full backpressure observed in logs as `sequencer queue full, dropping txs (backpressure)` instead of fatal errors
  • Existing solo + reaping + submitting test suites pass

walldiss added 4 commits May 3, 2026 01:42
Under sustained ingest above the block-production drain rate,
SoloSequencer.queue grew monotonically. A 32-vCPU loadgen pushing
>100 MB/s into a runner whose executor drains ~100 MB/s per block
filled the queue at ~150 MB/s of net-positive growth — heap
profiles showed 24 GB of retained io.ReadAll bytes in the queue
within ~30 s, then anon-rss:63GB OOM-kill at the box's 64 GiB
ceiling. Reproducible twice with identical signature.

Two changes, one feature:

- SoloSequencer.SetMaxQueueBytes(n) caps the queue's total
  retained tx bytes. SubmitBatchTxs uses all-or-nothing admission
  against the cap: if the incoming batch would push us over, the
  whole batch is rejected with ErrQueueFull and the queue keeps
  its current contents untouched. Partial admission would force
  the caller to track which prefix succeeded and only re-feed the
  suffix on retry; the reaper currently doesn't do that, so the
  whole-batch rule lets the reaper just retry the same batch
  later when the queue has drained. queueBytes is decremented
  on drain (queue = nil) and re-counted for postponed txs that
  the executor's FilterTxs returns to the queue. Zero cap = the
  legacy unbounded path, preserved for tests and small
  deployments.

- The reaper bridging executor mempool → sequencer matches
  ErrQueueFull via errors.Is and treats it as transient
  backpressure: marks the rejected hashes as "seen" so the
  next reaper tick doesn't re-hash + re-submit the same already-
  rejected txs forever, logs a warn line with the dropped count,
  and continues running. Without this match every queue-full
  event would tear the daemon down via the existing fatal-on-
  submit-error path.

Loadgen sees the backpressure indirectly: with the sequencer
queue full, the executor's txChan stops draining, /tx blocks on
its bounded channel send, and txsim observes 5xx / timeouts —
cleanly applied at the application layer instead of via the
kernel OOM-killer.
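
A sketch of the reaper-side handling described above, assuming a simplified Reaper struct and an assumed import path for the solo package; only the errors.Is match and the "mark seen, warn, continue" behaviour come from this PR:

```go
package reaping

import (
	"context"
	"errors"
	"log/slog"

	"github.com/evstack/ev-node/sequencers/solo" // import path is an assumption
)

// submitter stands in for the sequencer client; the real call builds a
// coresequencer batch request as shown in the review snippet further down.
type submitter interface {
	SubmitBatchTxs(ctx context.Context, txs [][]byte) error
}

// Reaper is reduced to the fields this sketch needs.
type Reaper struct {
	seq    submitter
	seen   map[string]struct{} // tx hash -> already handled, skip next tick
	logger *slog.Logger
}

// submitBatch treats a full sequencer queue as transient backpressure
// instead of letting it hit the fatal-on-submit-error path.
func (r *Reaper) submitBatch(ctx context.Context, txs [][]byte, hashes []string) error {
	err := r.seq.SubmitBatchTxs(ctx, txs)
	if errors.Is(err, solo.ErrQueueFull) {
		// Mark the rejected hashes as seen so the next tick does not re-hash
		// and re-submit them, log the dropped count, and keep running.
		for _, h := range hashes {
			r.seen[h] = struct{}{}
		}
		r.logger.Warn("sequencer queue full, dropping txs (backpressure)", "count", len(txs))
		return nil
	}
	return err // any other submit error still propagates as before
}
```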
The stub executor used by the runner returned FilterOK for
every transaction unconditionally, ignoring the maxBytes
budget plumbed through SoloSequencer.GetNextBatch. Under
sustained txsim load (~50 MiB/s, 8 concurrent senders) the
mempool would accumulate ~50K txs while a 100 MiB upload
was in flight; on the next batch the sequencer drained
ALL of them into one block (~369 MiB raw), the submitter
saw a single item exceeding the per-blob cap, and halted
the node with `single item exceeds DA blob size limit`.

Walk the input txs in arrival order, accumulate sizes
against maxBytes, and return FilterPostpone past the
budget so the sequencer puts the overflow back on its
queue. Verified live: blocks now cap at ~10K txs / ~100
MiB and evnode sustains 58.77 MB/s DA upload throughput
through a 5-min txsim run with zero crashes (previously the
node crashed within 30 s).
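
A minimal sketch of the size-budgeted filter described above. The FilterOK / FilterPostpone names follow the PR text; the executor struct and exact method signature are simplified assumptions:

```go
package executor

// FilterStatus is the per-tx verdict the sequencer acts on.
type FilterStatus int

const (
	FilterOK       FilterStatus = iota // include in the next block
	FilterPostpone                     // return to the sequencer queue
)

type inMemExecutor struct{}

// FilterTxs walks txs in arrival order and accumulates sizes against
// maxBytes; once the budget would be exceeded, every remaining tx is
// postponed so the block stays under the DA blob cap.
func (e *inMemExecutor) FilterTxs(maxBytes uint64, txs [][]byte) []FilterStatus {
	statuses := make([]FilterStatus, len(txs))
	var used uint64
	over := false
	for i, tx := range txs {
		if over || used+uint64(len(tx)) > maxBytes {
			over = true
			statuses[i] = FilterPostpone
			continue
		}
		used += uint64(len(tx))
		statuses[i] = FilterOK
	}
	return statuses
}
```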
Two runner-side changes paired with the SoloSequencer bound:

- After constructing the SoloSequencer, call SetMaxQueueBytes
  with 10× the per-block tx budget (= 1 GiB at the current 100
  MiB MaxBlobSize). 10× is the sweet spot: large enough that a
  short burst above steady-state ingest doesn't trigger
  backpressure (we want to absorb that), small enough that the
  worst-case retained bytes fit comfortably under the box's
  RAM budget alongside the pending cache + DA in-flight buffers.

- Lift the inMemExecutor's hardcoded ingest caps. txChan and
  maxBlockTxs were sized at 500 (5 MB / 5K txs per reaper poll)
  back when those were the only memory bound on the runner. With
  the SetMaxQueueBytes cap and the FilterTxs-enforced per-block
  budget now actually doing the bounding, the ingest queue can
  hold a full 100 MiB block-worth of txs (10K slots at 10 KB)
  without burdening memory — and a single reaper poll can
  drain that whole batch in one GetTxs call instead of
  needing 20× cycles. This was the binding constraint at
  ~5,000 tx/s = 50 MB/s in earlier runs.
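
A sketch of the runner wiring this describes; the constant names, package, and function shown are placeholders, only SetMaxQueueBytes, the 10× factor, and the 10,000 figures come from this PR:

```go
package runner

import "github.com/evstack/ev-node/sequencers/solo" // import path is an assumption

const (
	maxBlobSize    = 100 << 20        // placeholder for the 100 MiB DA blob cap
	maxQueueBytes  = 10 * maxBlobSize // ~1 GiB: ten blocks' worth of queued txs
	ingestQueueCap = 10_000           // new txChan capacity, ~100 MiB at ~10 KB/tx (was 500)
	maxBlockTxs    = 10_000           // txs drained per GetTxs call (was 500)
)

// wireSequencer marks the spot in the evnode-fibre runner right after the
// SoloSequencer has been constructed.
func wireSequencer(seq *solo.SoloSequencer) {
	// Bound the sequencer queue: large enough to absorb short ingest bursts,
	// small enough to stay under host RAM next to the pending cache and the
	// DA in-flight buffers.
	seq.SetMaxQueueBytes(maxQueueBytes)
}
```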
ApplyFiberDefaults set MaxPendingHeadersAndData=50, but each pending
data item under Fiber is up to MaxBlobSize (~100 MiB raw). With
3-FSP fan-out and per-attempt retry buffers in flight, 50 items × 3
× retries crossed 64 GiB on c6in.8xlarge under sustained txsim load
and the kernel OOM-killed evnode 30 s into the run.

10 keeps the in-flight footprint bounded while still letting healthy
uploads pipeline against the actual Fibre RPC latency. Verified by
heap profiling: pending data plateaus at ~10 × 100 MiB plus
fan-out, keeping RSS below ~10 GiB; evnode runs indefinitely.
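
A sketch of the defaults change, assuming a flat config struct; only the field name and the 50 → 10 value come from this PR:

```go
package config

// Config is trimmed to the single field this sketch touches.
type Config struct {
	MaxPendingHeadersAndData uint64
}

// ApplyFiberDefaults applies Fibre-specific overrides to the node config.
func ApplyFiberDefaults(cfg *Config) {
	// Each pending item can be a ~100 MiB blob fanned out to 3 FSPs with
	// per-attempt retry buffers, so 50 in flight crossed the 64 GiB host
	// ceiling; 10 keeps the worst case bounded while uploads still pipeline.
	cfg.MaxPendingHeadersAndData = 10
}
```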


@julienrbrt julienrbrt left a comment

small comment on solo, i'd rather see this extracted to main (with a changelog).

```go
Id: []byte(r.chainID),
Batch: &coresequencer.Batch{Transactions: newTxs},
})
if errors.Is(err, solo.ErrQueueFull) {
```

This should never be brought to main. Those txs will effectively be lost until the tx cache clears.


Solo was made to be very, very simple (basically single without all the bells and whistles), but adding a small size constraint makes sense.

Could you clean it up and bring this to main instead? We can then merge main into the fiber branch.
